
The 'Farmer's Almanac' says goodbye after 208 years

Popular Science

After more than 200 years of weather wisdom, folklore, and time-tested advice, the editors of the 'Farmer's Almanac' have announced that the 2026 edition will be the last. The website will remain operational through the end of December 2025. "Many of you grew up hearing your parents or grandparents quote from the 'Farmer's Almanac,' always having a copy nearby. Maybe you have planted by our Moon phases, consulted the 'Almanac' for the 'Best Days' to potty train, wean, or go fishing," Editor Sandi Duncan and Editor Emeritus Peter Geiger wrote in the announcement.


Improving Zero-shot Sentence Decontextualisation with Content Selection and Planning

Deng, Zhenyun, Chen, Yulong, Vlachos, Andreas

arXiv.org Artificial Intelligence

Extracting individual sentences from a document as evidence or reasoning steps is commonly done in many NLP tasks. However, extracted sentences often lack the context necessary to be understood, e.g., coreference links and background information. To this end, we propose a content selection and planning framework for zero-shot decontextualisation, which determines what content should be mentioned and in what order for a sentence to be understood out of context. Specifically, given a potentially ambiguous sentence and its context, we first segment it into basic semantically independent units. We then identify potentially ambiguous units from the given sentence, and extract relevant units from the context based on their discourse relations. Finally, we generate a content plan to rewrite the sentence by enriching each ambiguous unit with its relevant units. Experimental results demonstrate that our approach is competitive for sentence decontextualisation, producing sentences that exhibit better semantic integrity and discourse coherence than those of existing methods.
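
The pipeline the abstract describes (segment, flag ambiguous units, select resolving content, plan a rewrite) can be illustrated with a toy heuristic. This is a minimal sketch, not the paper's method: it treats only pronouns as ambiguous and takes the resolving antecedent as a given input, where the real system selects it from the context via discourse relations.

```python
# Toy sketch of content selection and planning for decontextualisation.
# Assumptions: segmentation is a comma split, ambiguity means "contains
# a pronoun", and the antecedent is supplied rather than extracted.

PRONOUNS = {"it", "he", "she", "they", "this", "these"}

def segment(sentence):
    # Toy segmentation into clause-level units.
    return [u.strip() for u in sentence.split(",")]

def is_ambiguous(unit):
    return any(w.lower().strip(".") in PRONOUNS for w in unit.split())

def decontextualise(sentence, antecedent):
    # 'antecedent' stands in for content a real system would select
    # from the surrounding context based on discourse relations.
    plan = []
    for unit in segment(sentence):
        if is_ambiguous(unit):
            # Enrich the ambiguous unit: swap pronouns for the antecedent.
            words = [antecedent if w.lower().strip(".") in PRONOUNS else w
                     for w in unit.split()]
            plan.append(" ".join(words))
        else:
            plan.append(unit)
    return ", ".join(plan)

out = decontextualise("It was released in 2007.", antecedent="The iPhone")
```

The rewritten sentence now stands on its own without the preceding context that introduced the referent.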


Gen AI in Proof-based Math Courses: A Pilot Study

Klawa, Hannah, Rajpal, Shraddha, Thomas, Cigole

arXiv.org Artificial Intelligence

With the rapid rise of generative AI in higher education and the unreliability of current AI detection tools, developing policies that encourage student learning and critical thinking has become increasingly important. This study examines student use and perceptions of generative AI across three proof-based undergraduate mathematics courses: a first-semester abstract algebra course, a topology course and a second-semester abstract algebra course. In each case, course policy permitted some use of generative AI. Drawing on survey responses and student interviews, we analyze how students engaged with AI tools, their perceptions of generative AI's usefulness and limitations, and what implications these perceptions hold for teaching proof-based mathematics. We conclude by discussing future considerations for integrating generative AI into proof-based mathematics instruction.


Seemingly Plausible Distractors in Multi-Hop Reasoning: Are Large Language Models Attentive Readers?

Bhuiya, Neeladri, Schlegel, Viktor, Winkler, Stefan

arXiv.org Artificial Intelligence

State-of-the-art Large Language Models (LLMs) are credited with an increasing number of different capabilities, ranging from reading comprehension to advanced mathematical and reasoning skills and scientific knowledge. In this paper we focus on their multi-hop reasoning capability: the ability to identify and integrate information from multiple textual sources. Given the concerns about simplifying cues in existing multi-hop reasoning benchmarks, which allow models to circumvent the reasoning requirement, we set out to investigate whether LLMs are prone to exploiting such cues. We find evidence that they indeed circumvent the requirement to perform multi-hop reasoning, but in more subtle ways than what was reported about their fine-tuned pre-trained language model (PLM) predecessors. Motivated by this finding, we propose a challenging multi-hop reasoning benchmark by generating seemingly plausible multi-hop reasoning chains that ultimately lead to incorrect answers. We evaluate multiple open and proprietary state-of-the-art LLMs and find that their ability to perform multi-hop reasoning is affected, as indicated by up to a 45% relative decrease in F1 score when presented with such seemingly plausible alternatives. We conduct a deeper analysis and find evidence that while LLMs tend to ignore misleading lexical cues, misleading reasoning paths indeed present a significant challenge.
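
The distractor construction can be pictured as rerouting a genuine reasoning chain through a wrong bridge entity. The sketch below is purely illustrative, with made-up example data; the paper's actual generation procedure is not reproduced here.

```python
# Toy illustration of a "seemingly plausible" distractor chain: keep the
# question's subject and relations from a genuine two-hop chain, but
# swap the bridge entity so the chain ends at an incorrect answer.
# The triples below are illustrative, not benchmark data.

def make_distractor_chain(chain, fake_bridge, fake_answer):
    """chain: [(subject, relation, object), ...] for a two-hop question."""
    (s1, r1, _), (_, r2, _) = chain
    # Same surface form (subject and relations), wrong bridge and answer.
    return [(s1, r1, fake_bridge), (fake_bridge, r2, fake_answer)]

gold = [("Inception", "directed_by", "Christopher Nolan"),
        ("Christopher Nolan", "born_in", "London")]
distractor = make_distractor_chain(gold, "Steven Spielberg", "Cincinnati")
```

A model that matches on lexical overlap alone cannot tell the two chains apart; it must verify each hop against the evidence.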


DRAGIN: Dynamic Retrieval Augmented Generation based on the Information Needs of Large Language Models

Su, Weihang, Tang, Yichen, Ai, Qingyao, Wu, Zhijing, Liu, Yiqun

arXiv.org Artificial Intelligence

The dynamic retrieval augmented generation (RAG) paradigm actively decides when and what to retrieve during the text generation process of Large Language Models (LLMs). There are two key elements of this paradigm: identifying the optimal moment to activate the retrieval module (deciding when to retrieve) and crafting the appropriate query once retrieval is triggered (determining what to retrieve). However, current dynamic RAG methods fall short in both aspects. Firstly, the strategies for deciding when to retrieve often rely on static rules. Moreover, the strategies for deciding what to retrieve typically limit themselves to the LLM's most recent sentence or the last few tokens, while the LLM's real-time information needs may span the entire context. To overcome these limitations, we introduce a new framework, DRAGIN, i.e., Dynamic Retrieval Augmented Generation based on the real-time Information Needs of LLMs. Our framework is specifically designed to make decisions on when and what to retrieve based on the LLM's real-time information needs during the text generation process. We evaluate DRAGIN along with existing methods comprehensively over 4 knowledge-intensive generation datasets. Experimental results show that DRAGIN achieves superior performance on all tasks, demonstrating the effectiveness of our method. We have open-sourced all the code, data, and models on GitHub: https://github.com/oneal2000/DRAGIN/tree/main
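
The two decisions the abstract highlights, when to retrieve and what to retrieve, can be sketched as a generation loop. This is a self-contained toy, not DRAGIN's implementation: the stub model emits scripted tokens with uncertainty scores, the stub retriever does keyword matching, and the trigger threshold is an arbitrary assumption.

```python
# Toy dynamic-RAG loop: retrieve only when token-level uncertainty
# crosses a threshold ("when"), and build the query from the entire
# context so far rather than just the last few tokens ("what").

class StubModel:
    """Emits a scripted token stream plus a per-token uncertainty score."""
    def __init__(self, script):
        self.script = list(script)   # [(token, uncertainty), ...]
        self.pos = 0
    def next_token(self, context):
        return self.script[self.pos]
    def advance(self):
        self.pos += 1

class StubRetriever:
    """Keyword overlap stands in for a real sparse/dense retriever."""
    def __init__(self, corpus):
        self.corpus = corpus
    def search(self, query, k=1):
        hits = [d for d in self.corpus if any(w in d for w in query.split())]
        return hits[:k]

def generate(model, retriever, prompt, threshold=0.5, max_steps=20):
    evidence, output = [], []
    for _ in range(max_steps):
        # The retrieval query covers everything generated so far.
        context = prompt + " " + " ".join(evidence) + "".join(output)
        token, uncertainty = model.next_token(context)
        if uncertainty > threshold and not evidence:
            # "When to retrieve": uncertainty crossed the threshold,
            # so pause generation and fetch evidence.
            evidence = retriever.search(context, k=1)
            continue  # retry the uncertain step with evidence in context
        if token == "<eos>":
            break
        output.append(token)
        model.advance()
    return "".join(output), evidence

model = StubModel([("Paris", 0.9), (" is", 0.1), (" the capital", 0.2),
                   ("<eos>", 0.0)])
retriever = StubRetriever(["Paris is the capital of France."])
answer, used = generate(model, retriever, "Q: What is the capital of France? A:")
```

In the toy run, the first token is uncertain, so the loop retrieves once, then decodes the rest without further lookups.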


At least 22 killed in Maine mass shooting, Israel conducts Gaza ground incursion and more top headlines

FOX News

The Lewiston Police Department identified Robert Card, 40, as a person of interest in connection with a mass shooting in Lewiston, Maine, Wednesday night. MAINE MANHUNT – Tens of thousands on lockdown as cops expand hunt for person of interest after 22 killed in massacre. BOOTS ON THE GROUND – Hamas hostage count rises as Israel pulls off most daring raid yet for 'next stages.' READY OR NOT – No honeymoon for newly elected speaker as he faces dilemmas from day one. CONCERNING – Senator says FBI received 'criminal information' from over 40 confidential sources on Joe Biden, Hunter, James.


Enhancing Retrieval-Augmented Large Language Models with Iterative Retrieval-Generation Synergy

Shao, Zhihong, Gong, Yeyun, Shen, Yelong, Huang, Minlie, Duan, Nan, Chen, Weizhu

arXiv.org Artificial Intelligence

Large language models are powerful text processors and reasoners, but are still subject to limitations including outdated knowledge and hallucinations, which necessitates connecting them to the world. Retrieval-augmented large language models have attracted extensive attention for grounding model generation on external knowledge. However, retrievers struggle to capture relevance, especially for queries with complex information needs. Recent work has proposed to improve relevance modeling by having large language models actively involved in retrieval, i.e., to improve retrieval with generation. In this paper, we show that strong performance can be achieved by a method we call Iter-RetGen, which synergizes retrieval and generation in an iterative manner. A model output shows what might be needed to finish a task, and thus provides an informative context for retrieving more relevant knowledge, which in turn helps generate a better output in the next iteration. Compared with recent work that interleaves retrieval with generation when producing an output, Iter-RetGen processes all retrieved knowledge as a whole and largely preserves the flexibility of generation without structural constraints. We evaluate Iter-RetGen on multi-hop question answering, fact verification, and commonsense reasoning, and show that it can flexibly leverage parametric and non-parametric knowledge, and is superior to or competitive with state-of-the-art retrieval-augmented baselines while incurring lower retrieval and generation overheads. We can further improve performance via generation-augmented retrieval adaptation.
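
The iterate-and-refine idea can be sketched as a short loop: each round uses the previous generation to enrich the retrieval query, then regenerates over all retrieved passages at once. The stand-in retriever and generator below are illustrative assumptions, not the paper's components.

```python
# Toy sketch of the Iter-RetGen loop: generation-augmented retrieval
# followed by retrieval-augmented generation, repeated.

def iter_retgen(generate, retrieve, question, iterations=2, k=2):
    output = ""
    for _ in range(iterations):
        # Generation-augmented retrieval: the previous output adds terms
        # the final answer likely needs, sharpening the query.
        query = (question + " " + output).strip()
        passages = retrieve(query, k)
        # Retrieval-augmented generation over all passages as a whole.
        output = generate(question, passages)
    return output

# Illustrative stand-ins so the loop is runnable.
CORPUS = ["The Eiffel Tower is in Paris.",
          "Paris is the capital of France."]

def toy_retrieve(query, k):
    scored = sorted(CORPUS, key=lambda d: -sum(w in d for w in query.split()))
    return scored[:k]

def toy_generate(question, passages):
    return " ".join(passages)

result = iter_retgen(toy_generate, toy_retrieve, "Where is the Eiffel Tower?")
```

Because each iteration re-retrieves with a richer query and regenerates from scratch over the full evidence set, no structural constraint is imposed on the output itself.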


Got robots? These hospitals do

#artificialintelligence

Dr. Boris Kovalenko, an orthopedic surgeon, stands June 30 with a robot that assists with knee replacements at St. Mary's Regional Medical Center in Lewiston. A robot can probably help with that. Robotics-assisted technology is becoming more common in operating rooms across Maine, with St. Mary's Regional Medical Center in Lewiston recently joining the list of hospitals employing the new methods. "It kind of is a fundamentally different way of doing a knee replacement," Dr. Boris Kovalenko, an orthopedic surgeon at St. Mary's, said earlier this summer. Kovalenko was standing in an operating room at St. Mary's, holding what looked more like a tricked-out hot glue gun attached to a rolling TV cart than an advanced piece of surgical technology.


NJ students: Virtual National Leadership Conference held

#artificialintelligence

Family, Career and Community Leaders of America (FCCLA) held its first-ever virtual National Leadership Conference since the organization was founded in 1945. This year's virtual National Leadership Conference was a memorable chapter in FCCLA's story as the organization kicked off its 75th anniversary celebration and honored FCCLA's Class of 2020. Instead of gathering in-person, 65 of Edison's John P. Stevens High School FCCLA students logged on to an online platform from Tuesday, July 7, through Thursday, July 9, which blended virtual reality and gamification technology to transform FCCLA's National Leadership Conference (NLC) into an "on-demand" virtual experience. This year's historic virtual NLC included keynote speakers, breakout sessions, leadership training, a College Fair and EXPO, STAR Events recognition, networking opportunities, adviser professional development, and many more activities. With just a click of a button, members had the opportunity to participate in the Ultimate Leadership Experience in the comfort and safety of their home.